Disable auth-failing account groups instead of removing them (#78)
Conversation
- Fix orphaned function_call_output errors during summary/compaction
- Add GitHub API rate limit fallback for Codex instructions

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
**Fixed**:
- Compaction losing context: only filter orphaned function_call_output items, preserve matched pairs
- Agent creation failing: properly detect streaming vs non-streaming requests
- SSE/JSON response handling: convert SSE→JSON for generateText(), passthrough for streamText()

**Added**:
- gpt-5.1-chat-latest model support (normalizes to gpt-5.1)

**Technical**:
- Capture the original stream value before transformation
- The API always gets stream=true, but response handling is based on the original intent
- Orphan detection: only remove function_call_output without a matching function_call
## Added
- GPT 5.2 model family with 4 reasoning presets (low/medium/high/xhigh)
- Full image input support for all 16 models via modalities config
- GPT 5.2 model family in codex.ts with dedicated prompt handling

## Changed
- Model ordering: GPT 5.2 → Codex Max → Codex → Codex Mini → GPT 5.1
- Removed default presets without an explicit reasoning suffix
- Updated @opencode-ai/plugin to ^1.0.150 (audit fix)
- Test script now uses local dist and includes GPT 5.2 tests

## Tests
- 193 unit tests + 16 integration tests (all passing)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…fix, and HTML version update

This release combines fixes and features from PRs #62, #63, and #64:

### Added
- "none" reasoning effort support for GPT-5.2 and GPT-5.1 general purpose models
- gpt-5.2-none and gpt-5.1-none model mappings and presets (now 18 total models)
- 4 new unit tests for "none" reasoning behavior (197 total tests)

### Fixed
- Orphaned function_call_output 400 API errors: orphans are now converted to assistant messages to preserve context while avoiding API errors
- OAuth HTML version display updated from 1.0.4 to 4.1.0

### Technical Details
- getReasoningConfig() detects GPT-5.1 general purpose models and allows "none"
- Codex variants auto-convert "none" to "low" (or "medium" for Codex Mini)
- Orphan handling now works regardless of tools presence in the request

Contributors: @code-yeongyu (PR #63), @kanemontreuil (PR #64), @ben-vargas (PR #62)
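The "none"-to-effort conversion described in the technical details can be sketched as follows. This is an illustrative reimplementation, not the plugin's actual `getReasoningConfig()`; the function name `resolveReasoningEffort` and the substring-based model checks are assumptions made for the example.

```typescript
type ReasoningEffort = "none" | "low" | "medium" | "high";

// Hypothetical sketch: general-purpose models keep "none", while Codex
// variants auto-convert it ("medium" for Codex Mini, "low" otherwise),
// per the release notes above.
function resolveReasoningEffort(model: string, effort: ReasoningEffort): ReasoningEffort {
  const isCodexVariant = model.includes("codex");
  if (effort !== "none" || !isCodexVariant) {
    return effort; // explicit efforts and general-purpose "none" pass through
  }
  return model.includes("codex-mini") ? "medium" : "low";
}
```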
Auth reliability + error handling alignment + request integrity fixes
Remap activeIndex based on the selected account key to avoid unexpected account switching when deduplicating stored accounts. Add regression tests for storage normalization and oauth-success copy, and refresh docs/CI labeling.
Rename the npm package/bin to opencode-openai-codex-auth-multiaccount and repoint installer/config/docs to the fork repo. This keeps upstream lean while preserving the multi-account rotation feature set.
Rename package to opencode-openai-codex-auth-multi
Pull request overview
This PR changes the account-rotation/auth-failure behavior so same-refresh-token account variants are disabled with a reason instead of being removed, and hardens persistence/shutdown flows to reduce state-loss during overlapping saves and process termination.
Changes:
- Replace grouped account removal with `enabled: false` + `disabledReason: "auth-failure"`, and revive only auth-failure-disabled accounts on explicit login/restore.
- Serialize account saves (`enqueueSave`) and improve `flushPendingSave` to recover from older write failures while draining queued saves.
- Add shutdown cleanup hooks to flush pending saves and normalize storage shapes (omit unset optional fields, normalize invalid disabled reasons).
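The save-serialization pattern these changes describe can be sketched with a minimal promise-chain queue. This is an illustration of the technique, assuming a single `pendingSave` slot; the method names mirror the PR (`enqueueSave`, `hasPendingSave`, `flushPendingSave`) but the bodies are not the plugin's actual `AccountManager` code.

```typescript
type SaveTask = () => Promise<void>;

class SaveQueue {
  private pendingSave: Promise<void> | null = null;

  enqueueSave(task: SaveTask): Promise<void> {
    // Chain the new save after whatever is already queued; swallow the prior
    // error so one failed write does not poison every later queued save.
    const run = (this.pendingSave ?? Promise.resolve())
      .catch(() => {})
      .then(() => task());
    const tracked: Promise<void> = run
      .catch(() => {})
      .finally(() => {
        // Only clear the slot if no newer save was queued in the meantime.
        if (this.pendingSave === tracked) this.pendingSave = null;
      });
    this.pendingSave = tracked;
    return run;
  }

  hasPendingSave(): boolean {
    return this.pendingSave !== null;
  }

  async flushPendingSave(): Promise<void> {
    // Drain the queue, tolerating failures of older writes.
    while (this.pendingSave) {
      await this.pendingSave;
    }
  }
}
```

The key property is that every write observes the queue: a caller that bypasses `enqueueSave` would reopen exactly the overlap gap the reviewers flag further down.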
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| lib/accounts.ts | Introduces disabling by refresh token, trySetAccountEnabled, save serialization, and iterative flush logic. |
| index.ts | Switches to disable/revive semantics, tracks managers for shutdown flushing, and blocks manual re-enable of auth-failure disables. |
| lib/shutdown.ts | Makes cleanup idempotent and keeps signal handlers active through async cleanup. |
| lib/storage.ts | Normalizes enabled / disabledReason on load; strips stale metadata to match storage conventions. |
| lib/storage/migrations.ts | Adds AccountDisabledReason types + normalization helpers; v1→v3 migration defaults disabled accounts to "user". |
| lib/schemas.ts | Adds lenient DisabledReasonSchema preprocessing to avoid schema warnings on unknown reasons. |
| lib/constants.ts | Renames max auth failure constant to reflect disabling. |
| test/accounts.test.ts | Adds coverage for new disabling, enable/disable API, queued saves, and flush behaviors (incl. EBUSY). |
| test/index.test.ts | Adds coverage for disabled-pool messaging, revival semantics, shutdown flush, and manage-menu blocking. |
| test/index-retry.test.ts | Updates mocked manager interface for enabled-count selection. |
| test/shutdown.test.ts | Expands coverage for handler installation, idempotency, and cleanup draining. |
| test/storage.test.ts | Adds migration/normalization coverage for enabled omission and disabledReason normalization/stripping. |
| test/schemas.test.ts | Verifies schema-level normalization of unknown disabled reasons and no validation warnings. |
Pull request overview
This PR changes account-auth failure handling to disable (and persist reasons for) same-refresh-token account groups instead of removing them, and strengthens shutdown/save reliability so disabled state is less likely to be lost.
Changes:
- Introduces `disabledReason` (`user` | `auth-failure`) and normalizes/migrates storage to preserve "omit unset optional fields" while keeping legacy disabled accounts treated as user-disabled.
- Switches auth refresh failure handling from group removal to group disable, with improved user messaging and explicit-login revival semantics.
- Serializes account saves, adds shutdown-time flush of pending debounced saves, and hardens shutdown cleanup behavior (signals, reentrancy, timeouts), with expanded test coverage.
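The hardened shutdown behavior in the last bullet can be sketched as a re-entrant cleanup runner with a forced-exit timeout. This is a simplified illustration, not `lib/shutdown.ts` itself; `SIGNAL_CLEANUP_TIMEOUT_MS`, the injected `cleanup` callback, and the injected `exit` callback are stand-ins for the PR's internals.

```typescript
const SIGNAL_CLEANUP_TIMEOUT_MS = 5_000;

let cleanupPromise: Promise<void> | null = null;

// Re-entrant: a second signal while cleanup is running reuses the same run
// instead of starting cleanup again. If cleanup stalls, the timeout forces exit.
function runCleanupOnce(cleanup: () => Promise<void>, exit: () => void): Promise<void> {
  if (cleanupPromise) return cleanupPromise;
  const timeout = setTimeout(exit, SIGNAL_CLEANUP_TIMEOUT_MS);
  cleanupPromise = cleanup()
    .catch(() => {}) // a failed flush must not block process exit
    .finally(() => {
      clearTimeout(timeout);
      exit();
    });
  return cleanupPromise;
}
```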
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| test/storage.test.ts | Adds migration/normalization tests for enabled + disabledReason, including flagged-account load behavior. |
| test/shutdown.test.ts | Expands shutdown behavior tests (reentrancy, signals during cleanup, timeout behavior, handler registration). |
| test/schemas.test.ts | Validates schema behavior for unknown/invalid disabledReason values without warnings. |
| test/index.test.ts | Updates plugin tests for disabled-pool messaging, grouped disable behavior, manager flushing, and manage-menu rules. |
| test/index-retry.test.ts | Aligns retry tests with enabled-account counting and shutdown cleanup isolation. |
| test/accounts.test.ts | Adds extensive AccountManager tests for disable reasons, grouped disabling, and save/flush serialization. |
| lib/storage/migrations.ts | Adds AccountDisabledReason utilities and V1→V3 migration defaults for legacy disabled accounts. |
| lib/storage.ts | Normalizes persisted enabled and disabledReason fields, and normalizes flagged storage accordingly. |
| lib/shutdown.ts | Makes cleanup re-entrant, installs/removes signal handlers, adds timeout-forced exit on stalled cleanup. |
| lib/schemas.ts | Adds lenient DisabledReasonSchema preprocessing for storage compatibility. |
| lib/constants.ts | Renames auth-failure threshold constant to disabling semantics. |
| lib/accounts.ts | Implements grouped disable, reason-aware enable/disable API, and queued/flushable serialized saves. |
| index.ts | Uses enabled-account counts in selection, disables groups on auth failures, revives only auth-failure disables on login, and flushes pending saves on shutdown. |
1 issue found across 3 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="test/index.test.ts">
<violation number="1" location="test/index.test.ts:2340">
P3: Add hasPendingSave/flushPendingSave/waitForPendingSaveToSettle stubs to this customManager mock; otherwise the shutdown cleanup treats it as having pending saves and calls flushPendingSave(), which is undefined, causing a teardown TypeError warning and leaving the flush path untested.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
@coderabbitai review @cubic-dev-ai please re-run review on the latest commit

✅ Actions performed: Review triggered.

@coderabbitai resume
Pull request overview
This PR keeps account groups visible after repeated auth refresh failures by disabling them (with a persisted reason) instead of removing them, while also hardening persistence by serializing saves and flushing debounced writes during shutdown.
Changes:
- Replace "remove after auth failure" with "disable and persist `disabledReason`", and revive only `auth-failure` disables after explicit login/restore flows.
- Serialize disk writes and improve shutdown cleanup to flush pending/debounced saves, with retry behavior for transient Windows file locks.
- Normalize storage parsing/migration for `enabled`/`disabledReason` while preserving the "omit unset optional fields" storage shape.
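The "omit unset optional fields" normalization in the last bullet can be sketched as follows. The field names follow the PR's storage shape, but the function is a simplified illustration, not the actual `lib/storage.ts` logic.

```typescript
type AccountDisabledReason = "user" | "auth-failure";

interface StoredAccount {
  accountId: string;
  refreshToken: string;
  enabled?: boolean; // omitted entirely when the account is enabled
  disabledReason?: AccountDisabledReason;
}

function normalizeStoredAccount(raw: {
  accountId: string;
  refreshToken: string;
  enabled?: unknown;
  disabledReason?: unknown;
}): StoredAccount {
  const account: StoredAccount = {
    accountId: raw.accountId,
    refreshToken: raw.refreshToken,
  };
  if (raw.enabled === false) {
    account.enabled = false;
    // Unknown or missing reasons on a disabled account collapse to "user",
    // so legacy disabled accounts stay user-disabled after migration.
    account.disabledReason =
      raw.disabledReason === "auth-failure" ? "auth-failure" : "user";
  }
  // enabled !== false: both optional fields are left unset.
  return account;
}
```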
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| lib/accounts.ts | Adds disable-with-reason logic and serialized save/flush machinery. |
| lib/constants.ts | Renames the auth-failure threshold const to reflect disabling behavior. |
| lib/shutdown.ts | Reworks signal/beforeExit handling to drain cleanup and force-exit after a timeout. |
| lib/storage/migrations.ts | Introduces AccountDisabledReason and normalization helpers; updates v1→v3 migration. |
| lib/storage.ts | Normalizes persisted enabled/disabledReason fields for accounts and flagged accounts. |
| lib/schemas.ts | Adds a lenient schema for disabledReason to avoid rejecting legacy/unknown values. |
| index.ts | Switches to disable flow, adds revive-on-login option, updates selection UX, and adds shutdown flush hook. |
| test/accounts.test.ts | Adds coverage for disable reasons, blocked re-enable, and queued/flush save behavior. |
| test/index.test.ts | Updates plugin behavior tests for disabled pools, revival, toasts, and shutdown flush. |
| test/index-retry.test.ts | Adjusts mocks for enabled-account counting. |
| test/schemas.test.ts | Covers schema normalization behavior for unknown disabledReason values. |
| test/shutdown.test.ts | Expands shutdown handler/cleanup state tests (signals, beforeExit, timeout). |
| test/storage.test.ts | Adds migration/normalization tests for enabled/disabledReason and flagged records. |
```ts
let timeoutHandle: ReturnType<typeof setTimeout> | null = setTimeout(() => {
  requestExit();
}, SIGNAL_CLEANUP_TIMEOUT_MS);
void runCleanup().finally(() => {
  try {
    requestExit();
  } finally {
    // `process.exit()` normally terminates immediately. This only runs when exit
    // is intercepted (for example in tests), so we need to unlatch signal state.
    resetSignalStateAfterInterceptedExit();
  }
});
```
```ts
const toastShown = await showToast(message, "warning");
if (!toastShown) {
  console.log("\nRun 'opencode auth login' to re-enable this account.\n");
}
```
```ts
const waitForSettle = manager
  .waitForPendingSaveToSettle()
  .finally(() => {
    trackedManagerSettledWaits.delete(manager);
    if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
      trackedAccountManagersForCleanup.delete(manager);
    }
  });
```
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/storage.ts (1)
Lines 1003-1037: ⚠️ Potential issue | 🟠 Major

Legacy blocked-account migration loses auth-failure provenance here.

`loadFlaggedAccounts()` also routes `openai-codex-blocked-accounts.json` through this normalizer. Those records predate `disabledReason`, so this branch currently drops the auth-failure marker the new revive rules rely on; if a legacy record also omitted `enabled`, it can lose the disabled state too. This path should stamp legacy blocked entries as `enabled: false` / `disabledReason: "auth-failure"` instead of sending them through the generic normalization unchanged. Using `normalizeStoredEnabled()` here would also keep flagged storage on the same omit-when-enabled format as main storage.

Possible direction:

```diff
- const disabledReason = normalizeStoredAccountDisabledReason(
-   rawAccount.enabled,
-   rawAccount.disabledReason,
- );
+ const enabled = normalizeStoredEnabled(rawAccount.enabled);
+ const disabledReason = normalizeStoredAccountDisabledReason(
+   rawAccount.enabled,
+   rawAccount.disabledReason,
+ );
@@
- enabled: typeof rawAccount.enabled === "boolean" ? rawAccount.enabled : undefined,
+ enabled,
```

```ts
// In the legacy `openai-codex-blocked-accounts.json` migration path:
const migrated = normalizeFlaggedStorage(legacyData);
for (const account of migrated.accounts) {
  account.enabled ??= false;
  account.disabledReason ??= "auth-failure";
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/storage.ts` around lines 1003 - 1037, The normalization currently drops legacy auth-failure provenance for records routed through loadFlaggedAccounts() by not stamping missing enabled/disabledReason values; update the normalization so that after computing disabledReason via normalizeStoredAccountDisabledReason(rawAccount.enabled, rawAccount.disabledReason) you set enabled and disabledReason defaults for legacy blocked entries: if disabledReason is undefined and rawAccount.enabled is undefined (legacy path), set normalized.enabled = normalizeStoredEnabled(false) (or otherwise set enabled = false using normalizeStoredEnabled convention) and normalized.disabledReason = "auth-failure"; prefer calling normalizeStoredEnabled() rather than hard-omitting to keep flagged storage format consistent. Ensure this change is applied where FlaggedAccountMetadataV1 is built (the block using normalizeStoredAccountDisabledReason and producing normalized) so legacy openai-codex-blocked-accounts.json records retain auth-failure provenance.
♻️ Duplicate comments (1)
lib/accounts.ts (1)
Lines 986-1049: ⚠️ Potential issue | 🟠 Major

Keep the public save path inside the serialized lane.

`enqueueSave()` only protects callers that opt into it. Because `saveToDisk()` on line 986 is still the raw writer, any direct caller can start a write that `pendingSave` and `flushPendingSave()` never observe, reopening the overlap/shutdown gap this queue is supposed to close.

Proposed fix:

```diff
- async saveToDisk(): Promise<void> {
+ async saveToDisk(): Promise<void> {
+   return this.enqueueSave(() => this.persistToDisk());
+ }
+
+ private async persistToDisk(): Promise<void> {
    const activeIndexByFamily: Partial<Record<ModelFamily, number>> = {};
    for (const family of MODEL_FAMILIES) {
      const raw = this.currentAccountIndexByFamily[family];
@@
-   await saveAccounts(storage);
+   await saveAccounts(storage);
+ }
```

Then switch internal queued call sites to `this.saveToDisk()` or `this.persistToDisk()` so there is only one raw write path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/accounts.ts` around lines 986 - 1049, The public save path must go through the queue to avoid unobserved writes: make the raw disk writer (currently saveToDisk) an internal/private method (rename to persistToDisk or mark private) and create a single public method saveToDisk that calls enqueueSave(() => this.persistToDisk()), then update all internal call sites to call this.saveToDisk() (or this.persistToDisk() for the internal implementation) so every write is funneled via enqueueSave and observed by pendingSave/flushPendingSave.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@index.ts`:
- Around line 1727-1754: Detached account managers are being removed from
trackedAccountManagersForCleanup too eagerly, which can drop a manager that
later enqueues its first save; in setCachedAccountManager and
invalidateAccountManagerCache stop deleting the detached manager from
trackedAccountManagersForCleanup when managerHasPendingSave(...) is
false—instead always keep the detached manager in
trackedAccountManagersForCleanup (and call
scheduleTrackedManagerPrune(cachedAccountManager) only when it has pending
saves) until an explicit “no more users”/finalized signal removes it; update
logic around cachedAccountManager, activeAccountManagersForCleanup,
trackedAccountManagersForCleanup, pruneTrackedAccountManagersForCleanup,
scheduleTrackedManagerPrune and accountManagerPromise accordingly.
- Around line 3609-3619: The logs in the shouldEnable branch leak a raw account
identifier via the accountId field in logWarn and logInfo; update the handlers
around shouldEnable / target.disabledReason === "auth-failure" to stop emitting
target.accountId directly—either remove the accountId property or replace it
with a redacted value/short suffix (e.g., last 4 chars) before calling logWarn
and logInfo (the surrounding symbols are shouldEnable, target.disabledReason,
target.accountId, logWarn, logInfo, and showToast); ensure showToast behavior is
unchanged but that logging no longer includes the full raw identifier.
- Around line 1715-1724: The finalizer currently deletes
trackedManagerSettledWaits for manager even when a new save started
concurrently, so the manager can be left un-pruned; change the finally callback
on waitForSettle (the Promise from manager.waitForPendingSaveToSettle()) to
re-arm a new waiter if work is still pending: after deleting
trackedManagerSettledWaits.delete(manager) and before possibly removing from
trackedAccountManagersForCleanup, check if managerHasPendingSave(manager) (or
activeAccountManagersForCleanup.has(manager) as appropriate) and if so create a
fresh promise by calling manager.waitForPendingSaveToSettle() and store it back
into trackedManagerSettledWaits.set(manager, newWait) so the manager will get
another prune pass; ensure you use the same trackedManagerSettledWaits and
managerHasPendingSave/activeAccountManagersForCleanup symbols in this logic.
In `@lib/accounts.ts`:
- Around line 947-975: The code currently clears cooldown metadata
unconditionally when handling enabled=true in trySetAccountEnabled; change it to
only clear cooldown when performing a real re-enable transition (account.enabled
was false and is being set to true). In trySetAccountEnabled, capture the
account's prior enabled state (e.g., const wasEnabled = account.enabled) before
mutating, then only call this.clearAccountCooldown(account) and delete
account.disabledReason when enabled && !wasEnabled (preserving the auth-failure
and other branches as-is).
In `@test/index.test.ts`:
- Around line 3667-3782: Update the assertion for the legacy-disabled account to
match the normalized storage contract: for the revived entry with accountId
"org-legacy" (found from revivedEntries), assert that enabled is false and
disabledReason is "user" (instead of expecting undefined), and retain the checks
for coolingDownUntil and cooldownReason; this ensures the test verifies the
legacy-disable migration normalizes undefined reason to "user" for entries where
enabled === false.
---
Outside diff comments:
In `@lib/storage.ts`:
- Around line 1003-1037: The normalization currently drops legacy auth-failure
provenance for records routed through loadFlaggedAccounts() by not stamping
missing enabled/disabledReason values; update the normalization so that after
computing disabledReason via
normalizeStoredAccountDisabledReason(rawAccount.enabled,
rawAccount.disabledReason) you set enabled and disabledReason defaults for
legacy blocked entries: if disabledReason is undefined and rawAccount.enabled is
undefined (legacy path), set normalized.enabled = normalizeStoredEnabled(false)
(or otherwise set enabled = false using normalizeStoredEnabled convention) and
normalized.disabledReason = "auth-failure"; prefer calling
normalizeStoredEnabled() rather than hard-omitting to keep flagged storage
format consistent. Ensure this change is applied where FlaggedAccountMetadataV1
is built (the block using normalizeStoredAccountDisabledReason and producing
normalized) so legacy openai-codex-blocked-accounts.json records retain
auth-failure provenance.
---
Duplicate comments:
In `@lib/accounts.ts`:
- Around line 986-1049: The public save path must go through the queue to avoid
unobserved writes: make the raw disk writer (currently saveToDisk) an
internal/private method (rename to persistToDisk or mark private) and create a
single public method saveToDisk that calls enqueueSave(() =>
this.persistToDisk()), then update all internal call sites to call
this.saveToDisk() (or this.persistToDisk() for the internal implementation) so
every write is funneled via enqueueSave and observed by
pendingSave/flushPendingSave.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9fddc73e-c8f2-49b7-992e-be65efbaa709
📒 Files selected for processing (11)
index.ts, lib/accounts.ts, lib/schemas.ts, lib/shutdown.ts, lib/storage.ts, lib/storage/migrations.ts, test/accounts.test.ts, test/index.test.ts, test/schemas.test.ts, test/shutdown.test.ts, test/storage.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/schemas.ts
```ts
const waitForSettle = manager
  .waitForPendingSaveToSettle()
  .finally(() => {
    trackedManagerSettledWaits.delete(manager);
    if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
      trackedAccountManagersForCleanup.delete(manager);
    }
  });

trackedManagerSettledWaits.set(manager, waitForSettle);
```
Re-arm the stale-manager waiter when work is still pending.
The finalizer clears `trackedManagerSettledWaits`, but if a new save is already pending at that point, the manager never gets another prune pass and can sit in `trackedAccountManagersForCleanup` until shutdown.
♻️ Proposed fix
```diff
 const waitForSettle = manager
   .waitForPendingSaveToSettle()
   .finally(() => {
     trackedManagerSettledWaits.delete(manager);
-    if (!managerHasPendingSave(manager) && !activeAccountManagersForCleanup.has(manager)) {
-      trackedAccountManagersForCleanup.delete(manager);
-    }
+    if (activeAccountManagersForCleanup.has(manager)) {
+      return;
+    }
+    if (managerHasPendingSave(manager)) {
+      scheduleTrackedManagerPrune(manager);
+      return;
+    }
+    trackedAccountManagersForCleanup.delete(manager);
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@index.ts` around lines 1715 - 1724, The finalizer currently deletes
trackedManagerSettledWaits for manager even when a new save started
concurrently, so the manager can be left un-pruned; change the finally callback
on waitForSettle (the Promise from manager.waitForPendingSaveToSettle()) to
re-arm a new waiter if work is still pending: after deleting
trackedManagerSettledWaits.delete(manager) and before possibly removing from
trackedAccountManagersForCleanup, check if managerHasPendingSave(manager) (or
activeAccountManagersForCleanup.has(manager) as appropriate) and if so create a
fresh promise by calling manager.waitForPendingSaveToSettle() and store it back
into trackedManagerSettledWaits.set(manager, newWait) so the manager will get
another prune pass; ensure you use the same trackedManagerSettledWaits and
managerHasPendingSave/activeAccountManagersForCleanup symbols in this logic.
```ts
it("preserves user-disabled and legacy-disabled variants while reviving auth-failure disables on explicit login", async () => {
  const accountsModule = await import("../lib/accounts.js");
  const authModule = await import("../lib/auth/auth.js");

  vi.mocked(authModule.exchangeAuthorizationCode)
    .mockResolvedValueOnce({
      type: "success",
      access: "access-no-org-1",
      refresh: "refresh-shared",
      expires: Date.now() + 300_000,
      idToken: "id-no-org-1",
    })
    .mockResolvedValueOnce({
      type: "success",
      access: "access-no-org-2",
      refresh: "refresh-shared",
      expires: Date.now() + 300_000,
      idToken: "id-no-org-2",
    });
  vi.mocked(accountsModule.getAccountIdCandidates).mockReturnValue([]);
  vi.mocked(accountsModule.extractAccountId)
    .mockReturnValueOnce("account-1")
    .mockReturnValueOnce(undefined);

  const mockClient = createMockClient();
  const { OpenAIOAuthPlugin } = await import("../index.js");
  const plugin = await OpenAIOAuthPlugin({ client: mockClient } as never) as unknown as PluginType;
  const autoMethod = plugin.auth.methods[0] as unknown as {
    authorize: (inputs?: Record<string, string>) => Promise<{ instructions: string }>;
  };

  await autoMethod.authorize({ loginMode: "add", accountCount: "1" });
  await autoMethod.authorize({ loginMode: "add", accountCount: "1" });

  expect(mockStorage.accounts).toHaveLength(1);
  mockStorage.accounts = [
    {
      accountId: "unrelated-disabled",
      organizationId: "org-unrelated",
      email: "unrelated@example.com",
      refreshToken: "different-refresh",
      enabled: false,
      disabledReason: "user",
      addedAt: 9,
      lastUsed: 9,
    },
    {
      accountId: "org-shared",
      organizationId: "org-keep",
      email: "org@example.com",
      refreshToken: "shared-refresh",
      enabled: false,
      disabledReason: "auth-failure",
      coolingDownUntil: 12_000,
      cooldownReason: "auth-failure",
      addedAt: 10,
      lastUsed: 10,
    },
    {
      accountId: "org-sibling",
      organizationId: "org-sibling",
      email: "sibling@example.com",
      refreshToken: "shared-refresh",
      enabled: false,
      disabledReason: "user",
      addedAt: 11,
      lastUsed: 11,
    },
    {
      accountId: "org-legacy",
      organizationId: "org-legacy",
      email: "legacy@example.com",
      refreshToken: "shared-refresh",
      enabled: false,
      coolingDownUntil: 18_000,
      cooldownReason: "auth-failure",
      addedAt: 12,
      lastUsed: 12,
    },
  ];

  vi.mocked(authModule.exchangeAuthorizationCode).mockResolvedValueOnce({
    type: "success",
    access: "access-shared-refresh",
    refresh: "shared-refresh",
    expires: Date.now() + 300_000,
    idToken: "id-shared-refresh",
  });
  vi.mocked(accountsModule.getAccountIdCandidates).mockReturnValueOnce([]);
  vi.mocked(accountsModule.extractAccountId).mockImplementation((accessToken) =>
    accessToken === "access-shared-refresh" ? "org-shared" : "account-1",
  );

  const mockClient = createMockClient();
  const { OpenAIOAuthPlugin } = await import("../index.js");
  const plugin = (await OpenAIOAuthPlugin({
    client: mockClient,
  } as never)) as unknown as PluginType;
  const autoMethod = plugin.auth.methods[0] as unknown as {
    authorize: (inputs?: Record<string, string>) => Promise<{ instructions: string }>;
  };

  await autoMethod.authorize({ loginMode: "add", accountCount: "1" });

  const revivedEntries = mockStorage.accounts.filter(
    (account) => account.refreshToken === "shared-refresh",
  );
  expect(revivedEntries).toHaveLength(3);
  expect(
    revivedEntries.find((account) => account.accountId === "org-shared"),
  ).toMatchObject({
    accountId: "org-shared",
    enabled: undefined,
    disabledReason: undefined,
    coolingDownUntil: undefined,
    cooldownReason: undefined,
  });
  expect(
    revivedEntries.find((account) => account.accountId === "org-sibling"),
  ).toMatchObject({
    accountId: "org-sibling",
    enabled: false,
    disabledReason: "user",
  });
  expect(
    revivedEntries.find((account) => account.accountId === "org-legacy"),
  ).toMatchObject({
    accountId: "org-legacy",
    enabled: false,
    coolingDownUntil: 18_000,
    cooldownReason: "auth-failure",
  });
  expect(
    revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
  ).toBeUndefined();
  expect(
    revivedEntries.find((account) => account.accountId === "org-shared")?.accessToken,
  ).toBe("access-shared-refresh");
  expect(
    mockStorage.accounts.find((account) => account.accountId === "unrelated-disabled"),
  ).toMatchObject({
    accountId: "unrelated-disabled",
    refreshToken: "different-refresh",
    enabled: false,
    disabledReason: "user",
  });
});
```
Keep the legacy-disabled expectation aligned with normalized storage.
Line 3769 currently expects `org-legacy` to keep `disabledReason === undefined`, but this PR's normalization contract treats `enabled: false` without a reason as `"user"`. Leaving the assertion as-is makes the test pass on a state production should already have canonicalized, so it can miss regressions in the legacy-disable migration path.
Proposed fix
```diff
 mockStorage.accounts = [
@@
   {
     accountId: "org-legacy",
     organizationId: "org-legacy",
     email: "legacy@example.com",
     refreshToken: "shared-refresh",
     enabled: false,
+    disabledReason: "user",
     coolingDownUntil: 18_000,
     cooldownReason: "auth-failure",
     addedAt: 12,
     lastUsed: 12,
   },
@@
-  expect(
-    revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
-  ).toBeUndefined();
+  expect(
+    revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
+  ).toBe("user");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/index.test.ts` around lines 3667 - 3782, Update the assertion for the
legacy-disabled account to match the normalized storage contract: for the
revived entry with accountId "org-legacy" (found from revivedEntries), assert
that enabled is false and disabledReason is "user" (instead of expecting
undefined), and retain the checks for coolingDownUntil and cooldownReason; this
ensures the test verifies the legacy-disable migration normalizes undefined
reason to "user" for entries where enabled === false.
✅ Actions performed: Reviews resumed.
Actionable comments posted: 1
♻️ Duplicate comments (3)
lib/accounts.ts (1)
997-1036: ⚠️ Potential issue | 🟠 Major

`saveToDisk()` still bypasses the serialized save lane.

`pendingSave` is only set through `enqueueSave`, but `saveToDisk()` remains a direct writer. Any direct call can overlap queued writes, and `flushPendingSave()` may conclude without waiting for that write path.

🔧 Refactor direction

```diff
- async saveToDisk(): Promise<void> {
+ private async writeToDisk(): Promise<void> {
    // existing storage serialization + saveAccounts(...)
  }
+ async saveToDisk(): Promise<void> {
+   return this.enqueueSave(() => this.writeToDisk());
+ }
```

Then replace explicit queue wrappers that currently call `this.saveToDisk()` (e.g., Line 287, Line 1121, Line 1192) with direct `this.saveToDisk()` calls to avoid double-queuing.

Also applies to: 1038-1060
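The serialized-lane invariant this comment asks for can be sketched in isolation. Below is a minimal, hypothetical model — the class name `SaveLane`, the counters, and the simulated I/O are illustrative; `enqueueSave`/`writeToDisk`/`saveToDisk` mirror the proposed refactor, not the plugin's actual code:

```typescript
// Minimal sketch of a serialized "save lane": every write is chained onto
// pendingSave, so two disk writes can never overlap even when saveToDisk()
// is called concurrently. Hypothetical names; not the real AccountManager.
class SaveLane {
  private pendingSave: Promise<void> | null = null;
  private writesInFlight = 0;
  maxConcurrentWrites = 0;
  writeOrder: number[] = [];
  private nextId = 0;

  private enqueueSave(task: () => Promise<void>): Promise<void> {
    // Chain onto the previous queued write, ignoring its failure so one
    // bad write cannot wedge the lane.
    const run = (this.pendingSave ?? Promise.resolve()).catch(() => {}).then(task);
    const tracked: Promise<void> = run.catch(() => {}).then(() => {
      if (this.pendingSave === tracked) this.pendingSave = null;
    });
    this.pendingSave = tracked;
    return run;
  }

  private async writeToDisk(): Promise<void> {
    const id = this.nextId++;
    this.writesInFlight++;
    this.maxConcurrentWrites = Math.max(this.maxConcurrentWrites, this.writesInFlight);
    await new Promise((resolve) => setTimeout(resolve, 5)); // simulated I/O
    this.writeOrder.push(id);
    this.writesInFlight--;
  }

  // Public entry point: always goes through the queue, never writes directly.
  saveToDisk(): Promise<void> {
    return this.enqueueSave(() => this.writeToDisk());
  }

  // Flush resolves only once everything queued so far has settled.
  async flushPendingSave(): Promise<void> {
    while (this.pendingSave) await this.pendingSave.catch(() => {});
  }
}

async function demo(): Promise<void> {
  const lane = new SaveLane();
  await Promise.all([lane.saveToDisk(), lane.saveToDisk(), lane.saveToDisk()]);
  console.log(lane.maxConcurrentWrites, lane.writeOrder.join(",")); // prints "1 0,1,2"
}
void demo();
```

With this shape, a caller that already holds the queue (the "explicit queue wrappers" the comment mentions) must call `saveToDisk()` directly rather than wrapping it in `enqueueSave` again, otherwise the write is queued twice.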
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/accounts.ts` around lines 997 - 1036, saveToDisk currently writes directly and can race with the queued save lane because pendingSave is only set via enqueueSave; fix this by funneling all disk writes through the serialized save path: make saveToDisk an internal writer (or otherwise stop calling it directly), update every direct caller to call enqueueSave() instead (so pendingSave is set and the queue serializes writes), and ensure flushPendingSave awaits the pendingSave promise before returning; refer to the methods saveToDisk, enqueueSave, pendingSave, flushPendingSave and the downstream saveAccounts call when making these changes.

index.ts (1)

1733-1760: ⚠️ Potential issue | 🟠 Major

Detached managers can still fall out of shutdown tracking too early.
At Line 1741 and Line 1759, detached managers are removed from
`trackedAccountManagersForCleanup` when they are idle at detach time. In-flight code paths can still enqueue a first debounced save on that detached instance later, which would then be missed by shutdown flushing. Keep detached managers tracked until a true finalization boundary.

🔧 Suggested fix

```diff
 const setCachedAccountManager = (manager: AccountManager): AccountManager => {
-  pruneTrackedAccountManagersForCleanup();
   if (cachedAccountManager && cachedAccountManager !== manager) {
     activeAccountManagersForCleanup.delete(cachedAccountManager);
-    if (managerHasPendingSave(cachedAccountManager)) {
-      trackedAccountManagersForCleanup.add(cachedAccountManager);
-      scheduleTrackedManagerPrune(cachedAccountManager);
-    } else {
-      trackedAccountManagersForCleanup.delete(cachedAccountManager);
-    }
+    trackedAccountManagersForCleanup.add(cachedAccountManager);
+    if (managerHasPendingSave(cachedAccountManager)) {
+      scheduleTrackedManagerPrune(cachedAccountManager);
+    }
   }
   activeAccountManagersForCleanup.add(manager);
   trackedAccountManagersForCleanup.delete(manager);
   cachedAccountManager = manager;
   accountManagerPromise = Promise.resolve(manager);
   return manager;
 };

 const invalidateAccountManagerCache = (): void => {
-  pruneTrackedAccountManagersForCleanup();
   if (cachedAccountManager) {
     activeAccountManagersForCleanup.delete(cachedAccountManager);
-    if (managerHasPendingSave(cachedAccountManager)) {
-      trackedAccountManagersForCleanup.add(cachedAccountManager);
-      scheduleTrackedManagerPrune(cachedAccountManager);
-    } else {
-      trackedAccountManagersForCleanup.delete(cachedAccountManager);
-    }
+    trackedAccountManagersForCleanup.add(cachedAccountManager);
+    if (managerHasPendingSave(cachedAccountManager)) {
+      scheduleTrackedManagerPrune(cachedAccountManager);
+    }
   }
   cachedAccountManager = null;
   accountManagerPromise = null;
 };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@index.ts` around lines 1733 - 1760, The detach logic in setCachedAccountManager and invalidateAccountManagerCache prematurely removes detached managers from trackedAccountManagersForCleanup when they're idle (lines around managerHasPendingSave branches), which allows in-flight debounced saves to be missed; update both functions (setCachedAccountManager and invalidateAccountManagerCache) to never call trackedAccountManagersForCleanup.delete(cachedAccountManager) for a detached manager — instead, if managerHasPendingSave(cachedAccountManager) add it and scheduleTrackedManagerPrune(cachedAccountManager), otherwise still keep it tracked (do not delete) so it remains part of shutdown flushing until a true finalization boundary; keep activeAccountManagersForCleanup.delete and other existing behavior but remove the delete-from-tracked calls and ensure tracked managers stay present until finalized.

test/index.test.ts (1)
3705-3714: ⚠️ Potential issue | 🟡 Minor

Align legacy-disabled assertion with normalized storage behavior.

Line 3709 sets a legacy disabled account (`enabled: false`) without `disabledReason`, but Line 3769 still expects `disabledReason` to remain `undefined`. That conflicts with this PR's normalization rule (`enabled: false` + missing reason ⇒ `"user"`), so the test currently validates non-canonical state.

Proposed fix

```diff
 {
   accountId: "org-legacy",
   organizationId: "org-legacy",
   email: "legacy@example.com",
   refreshToken: "shared-refresh",
   enabled: false,
+  disabledReason: "user",
   coolingDownUntil: 18_000,
   cooldownReason: "auth-failure",
   addedAt: 12,
   lastUsed: 12,
 },
@@
 expect(
   revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
-).toBeUndefined();
+).toBe("user");
```

Also applies to: 3768-3770
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/index.test.ts` around lines 3705 - 3714, The test sets a legacy disabled account object (accountId "org-legacy") with enabled: false but no disabledReason, yet later assertions still expect disabledReason to be undefined; update the test to match the normalization rule by expecting disabledReason === "user" for that account (or alternatively add disabledReason: "user" to the fixture), and apply the same change to the other duplicate assertion block referenced around the second occurrence (the one around lines 3768-3770) so both the fixture and assertion align with the canonical normalization behavior.
🧹 Nitpick comments (1)
test/index.test.ts (1)
2121: Prefer typed stub contracts with `satisfies` over `as never` for AccountManager mocks.

The `as never` casts disable type checking for partial mock implementations. Since AccountManager includes `flushPendingSave`, `waitForPendingSaveToSettle`, and other public methods not present in these test stubs, new API additions won't cause compilation errors, risking silent interface drift. Create a typed AccountManager stub contract and use `satisfies` to maintain compile-time verification of mock completeness.

Also applies to: 2284, 2385, 2463, 2910
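The difference between the two casts can be shown with a small standalone sketch. `Manager` here is a stand-in interface, not the real AccountManager from this repo:

```typescript
// Illustration of the `satisfies` suggestion, using a hypothetical Manager
// interface in place of the real AccountManager.
interface Manager {
  flushPendingSave(): Promise<void>;
  hasPendingSave(): boolean;
}

// `satisfies` keeps the literal's inferred type but checks it against
// Manager, so adding a method to Manager later breaks compilation here.
const stub = {
  flushPendingSave: async () => {},
  hasPendingSave: () => false,
} satisfies Manager;

// By contrast, a cast like `as unknown as Manager` (or `as never`) silences
// the check entirely:
//   const bad = { flushPendingSave: async () => {} } as unknown as Manager;
// compiles even though hasPendingSave is missing.

console.log(stub.hasPendingSave()); // prints "false"
```

The payoff is that a stub checked with `satisfies` fails to compile the moment the public interface grows, instead of drifting silently.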
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/index.test.ts` at line 2121, Replace the unsafe "as never" cast on the mocked AccountManager with a typed stub that uses TypeScript's "satisfies" to ensure the stub conforms to AccountManager; specifically, create a minimal stub object (e.g., customManager) typed as a partial but checked with "satisfies AccountManager" and include the public methods referenced by the real interface (flushPendingSave, waitForPendingSaveToSettle, etc.), then pass that stub into vi.spyOn(AccountManager, "loadFromDisk").mockResolvedValueOnce(...) so the compiler will catch missing or changed public methods instead of using "as never".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/shutdown.test.ts`:
- Around line 438-451: Wrap the process.on spy setup/teardown in a try/finally
to guarantee restoration: when creating the spy (processOnSpy =
vi.spyOn(process, "on")...) and importing registerCleanup/freshRegister, run the
test assertions inside a try block and call processOnSpy.mockRestore() in the
finally block so the spy is always restored even if expectations fail; apply the
same change to the other test block around freshRegister usage (the block
covering the similar assertions at 453-473).
---
Duplicate comments:
In `@index.ts`:
- Around line 1733-1760: The detach logic in setCachedAccountManager and
invalidateAccountManagerCache prematurely removes detached managers from
trackedAccountManagersForCleanup when they're idle (lines around
managerHasPendingSave branches), which allows in-flight debounced saves to be
missed; update both functions (setCachedAccountManager and
invalidateAccountManagerCache) to never call
trackedAccountManagersForCleanup.delete(cachedAccountManager) for a detached
manager — instead, if managerHasPendingSave(cachedAccountManager) add it and
scheduleTrackedManagerPrune(cachedAccountManager), otherwise still keep it
tracked (do not delete) so it remains part of shutdown flushing until a true
finalization boundary; keep activeAccountManagersForCleanup.delete and other
existing behavior but remove the delete-from-tracked calls and ensure tracked
managers stay present until finalized.
In `@lib/accounts.ts`:
- Around line 997-1036: saveToDisk currently writes directly and can race with
the queued save lane because pendingSave is only set via enqueueSave; fix this
by funneling all disk writes through the serialized save path: make saveToDisk
an internal writer (or otherwise stop calling it directly), update every direct
caller to call enqueueSave() instead (so pendingSave is set and the queue
serializes writes), and ensure flushPendingSave awaits the pendingSave promise
before returning; refer to the methods saveToDisk, enqueueSave, pendingSave,
flushPendingSave and the downstream saveAccounts call when making these changes.
In `@test/index.test.ts`:
- Around line 3705-3714: The test sets a legacy disabled account object
(accountId "org-legacy") with enabled: false but no disabledReason, yet later
assertions still expect disabledReason to be undefined; update the test to match
the normalization rule by expecting disabledReason === "user" for that account
(or alternatively add disabledReason: "user" to the fixture), and apply the same
change to the other duplicate assertion block referenced around the second
occurrence (the one around lines 3768-3770) so both the fixture and assertion
align with the canonical normalization behavior.
---
Nitpick comments:
In `@test/index.test.ts`:
- Line 2121: Replace the unsafe "as never" cast on the mocked AccountManager
with a typed stub that uses TypeScript's "satisfies" to ensure the stub conforms
to AccountManager; specifically, create a minimal stub object (e.g.,
customManager) typed as a partial but checked with "satisfies AccountManager"
and include the public methods referenced by the real interface
(flushPendingSave, waitForPendingSaveToSettle, etc.), then pass that stub into
vi.spyOn(AccountManager, "loadFromDisk").mockResolvedValueOnce(...) so the
compiler will catch missing or changed public methods instead of using "as
never".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 265dced9-8d3c-4123-ae30-7a54a773b6bb
📒 Files selected for processing (6)
index.ts, lib/accounts.ts, lib/shutdown.ts, test/accounts.test.ts, test/index.test.ts, test/shutdown.test.ts
```diff
 it("signal handlers are only registered once", async () => {
-  const processOnceSpy = vi.spyOn(process, "once").mockImplementation(() => process);
+  const processOnSpy = vi.spyOn(process, "on").mockImplementation(() => process);

   vi.resetModules();
   const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");

   freshRegister(() => {});
-  const firstCallCount = processOnceSpy.mock.calls.length;
+  const firstCallCount = processOnSpy.mock.calls.length;

   freshRegister(() => {});
   expect(processOnSpy.mock.calls.length).toBe(firstCallCount);

   processOnSpy.mockRestore();
 });
```
Use try/finally for spy restoration in these tests.
If an assertion throws before the trailing mockRestore() calls, process spies leak into later tests and can create misleading failures.
💡 Suggested hardening
it("signal handlers are only registered once", async () => {
const processOnSpy = vi.spyOn(process, "on").mockImplementation(() => process);
-
- vi.resetModules();
- const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");
-
- freshRegister(() => {});
- const firstCallCount = processOnSpy.mock.calls.length;
-
- freshRegister(() => {});
- expect(processOnSpy.mock.calls.length).toBe(firstCallCount);
-
- processOnSpy.mockRestore();
+ try {
+ vi.resetModules();
+ const { registerCleanup: freshRegister } = await import("../lib/shutdown.js");
+ freshRegister(() => {});
+ const firstCallCount = processOnSpy.mock.calls.length;
+ freshRegister(() => {});
+ expect(processOnSpy.mock.calls.length).toBe(firstCallCount);
+ } finally {
+ processOnSpy.mockRestore();
+ }
});

Also applies to: 453-473
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/shutdown.test.ts` around lines 438 - 451, Wrap the process.on spy
setup/teardown in a try/finally to guarantee restoration: when creating the spy
(processOnSpy = vi.spyOn(process, "on")...) and importing
registerCleanup/freshRegister, run the test assertions inside a try block and
call processOnSpy.mockRestore() in the finally block so the spy is always
restored even if expectations fail; apply the same change to the other test
block around freshRegister usage (the block covering the similar assertions at
453-473).
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/accounts.ts (1)
1117-1131: ⚠️ Potential issue | 🟠 Major

Retain retry intent after debounced save failures.

At Line 1125 and Line 1127, debounced save failures are only logged. Once `pendingSave` settles, `hasPendingSave()` (Line 1066) becomes false, so a later `flushPendingSave()` (Line 1134) can return without retrying unsaved state. This can drop critical disable-state persistence after transient `EBUSY`/lock failures.

💡 Proposed fix (track dirty-save retry state)

```diff
 class AccountManager {
+  private saveRetryNeeded = false;
   private saveDebounceTimer: ReturnType<typeof setTimeout> | null = null;
   private pendingSave: Promise<void> | null = null;
@@
   hasPendingSave(): boolean {
-    return this.saveDebounceTimer !== null || this.pendingSave !== null;
+    return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
   }
@@
   saveToDiskDebounced(delayMs = 500): void {
+    this.saveRetryNeeded = true;
     if (this.saveDebounceTimer) {
       clearTimeout(this.saveDebounceTimer);
     }
     this.saveDebounceTimer = setTimeout(() => {
       this.saveDebounceTimer = null;
       const doSave = async () => {
         try {
           await this.saveToDisk();
+          this.saveRetryNeeded = false;
         } catch (error) {
+          this.saveRetryNeeded = true;
           log.warn("Debounced save failed", { error: error instanceof Error ? error.message : String(error) });
         }
       };
       void doSave();
     }, delayMs);
   }
@@
   async flushPendingSave(options?: FlushPendingSaveOptions): Promise<void> {
@@
-    const hadDebouncedSave = !!this.saveDebounceTimer;
+    const hadDebouncedSave = this.saveRetryNeeded || !!this.saveDebounceTimer;
@@
     const flushSave = this.saveToDisk();
@@
     if (flushSaveError) {
       throw flushSaveError;
     }
+    this.saveRetryNeeded = false;
```

Also applies to: 1066-1068, 1134-1221
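As a sanity check on the proposed contract, the dirty-flag pattern can be exercised end to end in a self-contained sketch. `DebouncedSaver` and its simulated failing write are hypothetical stand-ins, not the real AccountManager:

```typescript
// Sketch of the retry-flag pattern: a failed debounced save keeps
// saveRetryNeeded set, so hasPendingSave() stays true and a later flush
// retries the write instead of dropping dirty state.
class DebouncedSaver {
  private saveRetryNeeded = false;
  private saveDebounceTimer: ReturnType<typeof setTimeout> | null = null;
  attempts = 0;
  failNextSave = false; // test hook simulating a transient lock failure

  private async saveToDisk(): Promise<void> {
    this.attempts++;
    if (this.failNextSave) {
      this.failNextSave = false;
      throw new Error("EBUSY: simulated transient lock");
    }
  }

  hasPendingSave(): boolean {
    return this.saveRetryNeeded || this.saveDebounceTimer !== null;
  }

  saveToDiskDebounced(delayMs = 10): void {
    this.saveRetryNeeded = true; // dirty until a save actually succeeds
    if (this.saveDebounceTimer) clearTimeout(this.saveDebounceTimer);
    this.saveDebounceTimer = setTimeout(() => {
      this.saveDebounceTimer = null;
      void this.saveToDisk()
        .then(() => { this.saveRetryNeeded = false; })
        .catch(() => { /* keep saveRetryNeeded = true for the flush retry */ });
    }, delayMs);
  }

  async flushPendingSave(): Promise<void> {
    if (!this.hasPendingSave()) return;
    if (this.saveDebounceTimer) {
      clearTimeout(this.saveDebounceTimer);
      this.saveDebounceTimer = null;
    }
    await this.saveToDisk(); // retry the write now
    this.saveRetryNeeded = false;
  }
}
```

Under this model, a transient failure leaves `hasPendingSave()` true, so shutdown-time `flushPendingSave()` retries the write rather than concluding there is nothing left to persist.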
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/accounts.ts` around lines 1117 - 1131, The debounced save currently logs failures in saveToDiskDebounced (inside saveDebounceTimer) but doesn't preserve retry intent, so hasPendingSave() becomes false and flushPendingSave() may skip retrying; to fix, introduce a retry-tracking flag (e.g., this.pendingSaveNeedsRetry or this.dirtySave) and update saveToDiskDebounced: on save failure set this.pendingSaveNeedsRetry = true (and do not clear retry intent), and on successful save clear it; update hasPendingSave() to return true if this.pendingSave or this.pendingSaveNeedsRetry is set, and update flushPendingSave() to attempt a retry when pendingSaveNeedsRetry is true (and clear the flag only after a successful save). Ensure symbols mentioned (saveToDiskDebounced, saveDebounceTimer, saveToDisk, pendingSave, hasPendingSave, flushPendingSave) are updated consistently.
♻️ Duplicate comments (1)
test/index.test.ts (1)
3769-3770: ⚠️ Potential issue | 🟡 Minor

Legacy-disabled assertion still conflicts with normalized storage behavior.

This still expects `disabledReason` to be `undefined` for `org-legacy`, but this PR's normalization contract treats legacy `enabled: false` entries without a reason as `"user"`.

Suggested assertion fix

```diff
-expect(
-  revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
-).toBeUndefined();
+expect(
+  revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason,
+).toBe("user");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/index.test.ts` around lines 3769 - 3770, The test incorrectly asserts that revivedEntries.find((account) => account.accountId === "org-legacy")?.disabledReason is undefined despite the normalization logic treating legacy entries with enabled: false and no reason as "user"; update the assertion to expect "user" (or the normalized constant used in the codebase) for the disabledReason of the account with accountId "org-legacy" so it matches the normalization contract applied in the code that produces revivedEntries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@index.ts`:
- Around line 3867-3870: After calling
persistAccountPool(selection.variantsForPersistence, ...) you must invalidate or
refresh the in-memory AccountManager cache so the runtime sees the persisted
enable/disable changes immediately; locate the usage of persistAccountPool and
add a call to the AccountManager cache refresh method (e.g.
AccountManager.invalidateCache(), AccountManager.reloadFromDisk(), or
accountManager.refreshAccounts()) immediately after the await succeeds (and not
only in other auth paths), ensuring the same error handling behavior is
preserved so stale in-memory account state is not retained.
In `@test/index.test.ts`:
- Around line 361-374: The mock's disableAccountsWithSameRefreshToken does not
reset the per-refresh-token auth-failure counter like production does; update
disableAccountsWithSameRefreshToken to also clear the failure entry for the
given refreshToken (mirror lib/accounts.ts behavior) — e.g., after determining
refreshToken and before returning, remove or reset the auth-failure tracking
entry (the same map/key used by the production code) so that sibling accounts
have the same reset parity as production.
---
Outside diff comments:
In `@lib/accounts.ts`:
- Around line 1117-1131: The debounced save currently logs failures in
saveToDiskDebounced (inside saveDebounceTimer) but doesn't preserve retry
intent, so hasPendingSave() becomes false and flushPendingSave() may skip
retrying; to fix, introduce a retry-tracking flag (e.g.,
this.pendingSaveNeedsRetry or this.dirtySave) and update saveToDiskDebounced: on
save failure set this.pendingSaveNeedsRetry = true (and do not clear retry
intent), and on successful save clear it; update hasPendingSave() to return true
if this.pendingSave or this.pendingSaveNeedsRetry is set, and update
flushPendingSave() to attempt a retry when pendingSaveNeedsRetry is true (and
clear the flag only after a successful save). Ensure symbols mentioned
(saveToDiskDebounced, saveDebounceTimer, saveToDisk, pendingSave,
hasPendingSave, flushPendingSave) are updated consistently.
---
Duplicate comments:
In `@test/index.test.ts`:
- Around line 3769-3770: The test incorrectly asserts that
revivedEntries.find((account) => account.accountId ===
"org-legacy")?.disabledReason is undefined despite the normalization logic
treating legacy entries with enabled: false and no reason as "user"; update the
assertion to expect "user" (or the normalized constant used in the codebase) for
the disabledReason of the account with accountId "org-legacy" so it matches the
normalization contract applied in the code that produces revivedEntries.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 247abe40-2571-417b-a590-a18aded5f2d7
📒 Files selected for processing (4)
index.ts, lib/accounts.ts, test/accounts.test.ts, test/index.test.ts
```ts
disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
  const refreshToken = account.refreshToken;
  let disabledCount = 0;
  for (const storedAccount of this.accounts) {
    if (storedAccount.refreshToken !== refreshToken) continue;
    if (storedAccount.enabled === false) continue;
    delete storedAccount.coolingDownUntil;
    delete storedAccount.cooldownReason;
    storedAccount.enabled = false;
    storedAccount.disabledReason = "auth-failure";
    disabledCount++;
  }
  return disabledCount;
}
```
Mock disable flow is still missing auth-failure counter reset parity.
disableAccountsWithSameRefreshToken in this mock disables siblings, but it does not mirror production’s per-refresh-token auth-failure reset (lib/accounts.ts clears the token’s failure entry). That gap can hide regressions around re-enable/retry flows after disable.
Suggested parity update
class MockAccountManager {
+ private authFailuresByRefreshToken = new Map<string, number>();
+
- clearAuthFailures() {}
- incrementAuthFailures() { return 1; }
+ clearAuthFailures(account: { refreshToken?: string }) {
+ if (!account.refreshToken) return;
+ this.authFailuresByRefreshToken.delete(account.refreshToken);
+ }
+ incrementAuthFailures(account: { refreshToken?: string }) {
+ const refreshToken = account.refreshToken ?? "";
+ const current = this.authFailuresByRefreshToken.get(refreshToken) ?? 0;
+ const next = current + 1;
+ this.authFailuresByRefreshToken.set(refreshToken, next);
+ return next;
+ }
disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
const refreshToken = account.refreshToken;
let disabledCount = 0;
for (const storedAccount of this.accounts) {
if (storedAccount.refreshToken !== refreshToken) continue;
if (storedAccount.enabled === false) continue;
delete storedAccount.coolingDownUntil;
delete storedAccount.cooldownReason;
storedAccount.enabled = false;
storedAccount.disabledReason = "auth-failure";
disabledCount++;
}
+ if (refreshToken) {
+ this.authFailuresByRefreshToken.delete(refreshToken);
+ }
return disabledCount;
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/index.test.ts` around lines 361 - 374, The mock's
disableAccountsWithSameRefreshToken does not reset the per-refresh-token
auth-failure counter like production does; update
disableAccountsWithSameRefreshToken to also clear the failure entry for the
given refreshToken (mirror lib/accounts.ts behavior) — e.g., after determining
refreshToken and before returning, remove or reset the auth-failure tracking
entry (the same map/key used by the production code) so that sibling accounts
have the same reset parity as production.
Actionable comments posted: 3
♻️ Duplicate comments (1)
test/index.test.ts (1)
361-374: ⚠️ Potential issue | 🟡 Minor

Keep grouped-disable mock in parity with production auth-failure reset.

`disableAccountsWithSameRefreshToken` in the test mock disables accounts but does not clear per-refresh-token auth-failure tracking, unlike production. That can mask regressions in post-disable recovery paths.

🧪 Suggested parity patch

```diff
 class MockAccountManager {
+  private authFailuresByRefreshToken = new Map<string, number>();
@@
-  clearAuthFailures() {}
-  incrementAuthFailures() { return 1; }
+  clearAuthFailures(account: { refreshToken?: string }) {
+    if (!account.refreshToken) return;
+    this.authFailuresByRefreshToken.delete(account.refreshToken);
+  }
+  incrementAuthFailures(account: { refreshToken?: string }) {
+    const refreshToken = account.refreshToken ?? "";
+    const current = this.authFailuresByRefreshToken.get(refreshToken) ?? 0;
+    const next = current + 1;
+    this.authFailuresByRefreshToken.set(refreshToken, next);
+    return next;
+  }
@@
   disableAccountsWithSameRefreshToken(account: { refreshToken?: string }) {
     const refreshToken = account.refreshToken;
     let disabledCount = 0;
@@
     }
+    if (refreshToken) {
+      this.authFailuresByRefreshToken.delete(refreshToken);
+    }
     return disabledCount;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/index.test.ts` around lines 361 - 374, The test mock function disableAccountsWithSameRefreshToken currently disables matching accounts but does not clear the per-refresh-token auth-failure tracking, causing divergence from production; update disableAccountsWithSameRefreshToken to also reset/clear the auth-failure state for the given refreshToken (e.g., remove this.failedAuthByRefreshToken[refreshToken] or call this.authFailureTracker.reset(refreshToken)) in addition to deleting coolingDownUntil/cooldownReason and setting enabled/disabledReason so the mock mirrors production recovery behavior.
🧹 Nitpick comments (2)
test/accounts.test.ts (1)
1844-1916: Add one Windows-style transient lock fixture.

These new retry cases only simulate `EBUSY`. If the save recovery code also treats Windows lock failures like `EPERM` or `EACCES` as transient, that branch is still uncovered and could regress unnoticed.

Also applies to: 1918-2017, 2533-2601, 2780-2855, 2857-2920, 3001-3031
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/accounts.test.ts` around lines 1844 - 1916, The test only simulates a Unix-style transient lock (EBUSY); add a Windows-style transient lock fixture by inserting another mockSaveAccounts.mockImplementationOnce that rejects with a Windows error (e.g. Object.assign(new Error("EPERM: file in use"), { code: "EPERM" }) or { code: "EACCES" }) before the final successful mockImplementationOnce, and update expectations (mockSaveAccounts call count and which savedSnapshots index is asserted) so the sequence is: first pending reject via rejectFirstSave, second reject with EPERM/EACCES, third a successful save; refer to mockSaveAccounts.mockImplementationOnce, rejectFirstSave, savedSnapshots, and AccountManager.saveToDiskDebounced to locate the change.index.ts (1)
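One way to phrase such a fixture is a tiny helper that builds errno-shaped errors. `transientLockError` is an illustrative name, not part of the existing test suite; the result is what the prompt suggests rejecting with via `mockSaveAccounts.mockImplementationOnce`:

```typescript
// Hypothetical fixture helper for platform-flavored transient lock errors:
// EBUSY for the existing Unix case, EPERM/EACCES for Windows-style failures.
interface ErrnoLike extends Error {
  code: string;
}

function transientLockError(code: "EBUSY" | "EPERM" | "EACCES"): ErrnoLike {
  // Mirrors Node's errno shape: a plain Error carrying a `code` property.
  return Object.assign(new Error(`${code}: file in use`), { code });
}

const winError = transientLockError("EPERM");
console.log(winError.code); // prints "EPERM"
```

A test could then reject one debounced save with `transientLockError("EPERM")` before the final successful save, covering the Windows-style branch alongside the existing `EBUSY` case.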
1818-1821: Extract repeated "reload manager from disk" logic into one helper.

The same `if (cachedAccountManager) { loadFromDisk(); setCachedAccountManager(...) }` block is duplicated in many places. A single helper reduces drift and keeps future cache-sync changes safe.

♻️ Refactor sketch

```diff
+const reloadCachedAccountManagerFromDisk = async (): Promise<void> => {
+  if (!cachedAccountManager) return;
+  const reloadedManager = await AccountManager.loadFromDisk();
+  setCachedAccountManager(reloadedManager);
+};

-if (cachedAccountManager) {
-  const reloadedManager = await AccountManager.loadFromDisk();
-  setCachedAccountManager(reloadedManager);
-}
+await reloadCachedAccountManagerFromDisk();
```

Also applies to: 4159-4162, 5216-5219, 5460-5463, 5560-5563, 5637-5640, 5953-5956, 6047-6050
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.ts` around lines 1818 - 1821, Extract the repeated "reload manager from disk" logic into a single helper: create a function (e.g. reloadCachedAccountManagerIfPresent or ensureCachedAccountManagerReloaded) that checks cachedAccountManager, calls AccountManager.loadFromDisk(), and invokes setCachedAccountManager(reloadedManager); then replace every duplicated block (instances referencing cachedAccountManager, AccountManager.loadFromDisk, setCachedAccountManager) with a call to that helper so all reload behavior is centralized and future cache-sync changes only need to touch the helper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/accounts.ts`:
- Around line 1073-1075: hasPendingSave() currently returns true only if
saveDebounceTimer or pendingSave are non-null and ignores the saveRetryNeeded
flag, so after a failed save it can wrongly report no pending work; update the
hasPendingSave() method to also check this.saveRetryNeeded (e.g., return
this.saveDebounceTimer !== null || this.pendingSave !== null ||
this.saveRetryNeeded === true) so flushPendingSave()/manager-settled/prune logic
will treat retry-needed state as pending work.
In `@test/accounts.test.ts`:
- Around line 951-969: The test currently sets stored.version = 3 so it
exercises current-version normalization only; change the fixture to use a legacy
version (set stored.version to 1 or 2) and ensure the account object lacks a
disabledReason so the migration path runs; update the assertions to verify that
the migration logic produces the expected disabledReason and that saveAccounts
(mocked via saveAccounts) is invoked with the migrated data. Target the stored
variable in this test and keep the mocked saveAccounts behavior
(mockResolvedValueOnce) while asserting the migration result.
- Around line 39-41: The "flushes pending debounced save" test is missing a
mockClear, causing stale calls to mockSaveAccounts to leak between tests; update
setup to clear the saveAccounts mock (mockSaveAccounts.mockClear()) before tests
run (either add mockSaveAccounts.mockClear() to the existing beforeEach that
currently clears accountLoggerWarn, or call mockSaveAccounts.mockClear() at the
start of the "flushes pending debounced save" test before mockResolvedValue())
so that expect(mockSaveAccounts).toHaveBeenCalled() only asserts on calls made
by the current test.
---
Duplicate comments:
In `@test/index.test.ts`:
- Around line 361-374: The test mock function
disableAccountsWithSameRefreshToken currently disables matching accounts but
does not clear the per-refresh-token auth-failure tracking, causing divergence
from production; update disableAccountsWithSameRefreshToken to also reset/clear
the auth-failure state for the given refreshToken (e.g., remove
this.failedAuthByRefreshToken[refreshToken] or call
this.authFailureTracker.reset(refreshToken)) in addition to deleting
coolingDownUntil/cooldownReason and setting enabled/disabledReason so the mock
mirrors production recovery behavior.
---
Nitpick comments:
In `@index.ts`:
- Around line 1818-1821: Extract the repeated "reload manager from disk" logic
into a single helper: create a function (e.g.
reloadCachedAccountManagerIfPresent or ensureCachedAccountManagerReloaded) that
checks cachedAccountManager, calls AccountManager.loadFromDisk(), and invokes
setCachedAccountManager(reloadedManager); then replace every duplicated block
(instances referencing cachedAccountManager, AccountManager.loadFromDisk,
setCachedAccountManager) with a call to that helper so all reload behavior is
centralized and future cache-sync changes only need to touch the helper.
In `@test/accounts.test.ts`:
- Around line 1844-1916: The test only simulates a Unix-style transient lock
(EBUSY); add a Windows-style transient lock fixture by inserting another
mockSaveAccounts.mockImplementationOnce that rejects with a Windows error (e.g.
Object.assign(new Error("EPERM: file in use"), { code: "EPERM" }) or { code:
"EACCES" }) before the final successful mockImplementationOnce, and update
expectations (mockSaveAccounts call count and which savedSnapshots index is
asserted) so the sequence is: first pending reject via rejectFirstSave, second
reject with EPERM/EACCES, third a successful save; refer to
mockSaveAccounts.mockImplementationOnce, rejectFirstSave, savedSnapshots, and
AccountManager.saveToDiskDebounced to locate the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f72d1c86-7c7d-48cd-86cb-d99cd6a2b947
📒 Files selected for processing (4)
- index.ts
- lib/accounts.ts
- test/accounts.test.ts
- test/index.test.ts
```ts
hasPendingSave(): boolean {
  return this.saveDebounceTimer !== null || this.pendingSave !== null;
}
```
Track retry-needed state as pending work.
hasPendingSave() ignores saveRetryNeeded. After a failed save, Line 1074 can return false even though flushPendingSave() still has retry work, which can let manager-settled/prune logic drop an instance before retry/flush.
💡 Suggested fix

```diff
 hasPendingSave(): boolean {
-  return this.saveDebounceTimer !== null || this.pendingSave !== null;
+  return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/accounts.ts` around lines 1073 - 1075, hasPendingSave() currently returns
true only if saveDebounceTimer or pendingSave are non-null and ignores the
saveRetryNeeded flag, so after a failed save it can wrongly report no pending
work; update the hasPendingSave() method to also check this.saveRetryNeeded
(e.g., return this.saveDebounceTimer !== null || this.pendingSave !== null ||
this.saveRetryNeeded === true) so flushPendingSave()/manager-settled/prune logic
will treat retry-needed state as pending work.
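A toy model of the pending-save state described above, with the suggested fix applied — `saveRetryNeeded` stands in for the flag `flushPendingSave` replays after a failed write, and the shapes are assumptions:

```typescript
class SaveState {
  saveDebounceTimer: object | null = null;
  pendingSave: Promise<void> | null = null;
  saveRetryNeeded = false;

  hasPendingSave(): boolean {
    // Retry-needed state counts as pending work, so settle/prune logic
    // cannot drop an instance while a failed save still awaits its retry.
    return this.saveRetryNeeded || this.saveDebounceTimer !== null || this.pendingSave !== null;
  }
}
```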
```ts
beforeEach(() => {
  accountLoggerWarn.mockClear();
});
```
🧩 Analysis chain

🏁 Scripts executed (repository: ndycode/oc-chatgpt-multi-auth):

```shell
# Locate test/accounts.test.ts
fd -t f "accounts.test.ts" | head -20

# Check the top of the test file for imports and mock setup
head -60 test/accounts.test.ts

# Look at lines 39-41 and the surrounding context
sed -n '30,50p' test/accounts.test.ts

# See how the saveAccounts mock is defined
rg "saveAccounts" test/accounts.test.ts -B 2 -A 2 | head -50

# Check around line 2317 where the flush assertion is mentioned
sed -n '2310,2325p' test/accounts.test.ts

# Look at the full "flushes pending debounced save" test
sed -n '2310,2370p' test/accounts.test.ts

# Find tests that use saveAccounts without calling mockClear
rg "saveAccounts\)" test/accounts.test.ts | grep -v "mockClear" | head -20

# Check the context before and after line 2317
sed -n '2300,2350p' test/accounts.test.ts
```
Add mockClear() to the "flushes pending debounced save" test.
The "flushes pending debounced save" test (line ~2317) does not clear saveAccounts before asserting on it. Unlike all other tests in the suite, it calls mockResolvedValue() without first calling mockClear(). Since beforeEach only clears accountLoggerWarn, the assertion expect(mockSaveAccounts).toHaveBeenCalled() can pass on stale calls from previous tests instead of only the current invocation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/accounts.test.ts` around lines 39 - 41, The "flushes pending debounced
save" test is missing a mockClear, causing stale calls to mockSaveAccounts to
leak between tests; update setup to clear the saveAccounts mock
(mockSaveAccounts.mockClear()) before tests run (either add
mockSaveAccounts.mockClear() to the existing beforeEach that currently clears
accountLoggerWarn, or call mockSaveAccounts.mockClear() at the start of the
"flushes pending debounced save" test before mockResolvedValue()) so that
expect(mockSaveAccounts).toHaveBeenCalled() only asserts on calls made by the
current test.
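A hand-rolled stand-in for a `vi.fn()` mock, illustrating why the shared `beforeEach` needs the extra `mockClear()`: call records persist across tests unless explicitly cleared.

```typescript
// Minimal fake of a recording mock function (not vitest's real vi.fn()).
class FakeMockFn {
  calls: unknown[][] = [];
  invoke(...args: unknown[]): void { this.calls.push(args); }
  mockClear(): void { this.calls = []; }
  toHaveBeenCalled(): boolean { return this.calls.length > 0; }
}
```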
```ts
it("defaults legacy disabled accounts to a user disable reason", async () => {
  const { saveAccounts } = await import("../lib/storage.js");
  const mockSaveAccounts = vi.mocked(saveAccounts);
  mockSaveAccounts.mockClear();
  mockSaveAccounts.mockResolvedValueOnce();

  const now = Date.now();
  const stored = {
    version: 3 as const,
    activeIndex: 0,
    accounts: [
      {
        refreshToken: "token-1",
        enabled: false,
        addedAt: now,
        lastUsed: now,
      },
    ],
  };
```
This "legacy" case never hits the migration path.
With version: 3, this test only verifies current-version normalization. It won't catch regressions in the V1/V2 → V3 defaulting behavior this PR is adding for disabledReason.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/accounts.test.ts` around lines 951 - 969, The test currently sets
stored.version = 3 so it exercises current-version normalization only; change
the fixture to use a legacy version (set stored.version to 1 or 2) and ensure
the account object lacks a disabledReason so the migration path runs; update the
assertions to verify that the migration logic produces the expected
disabledReason and that saveAccounts (mocked via saveAccounts) is invoked with
the migrated data. Target the stored variable in this test and keep the mocked
saveAccounts behavior (mockResolvedValueOnce) while asserting the migration
result.
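A sketch of the migration defaulting this comment wants exercised: a legacy (pre-V3) disabled account with no `disabledReason` should come out of the V1/V2 → V3 migration as `"user"`. The shapes and the helper name here are assumptions, not the real implementation:

```typescript
type LegacyStoredAccount = { refreshToken: string; enabled?: boolean };
type V3StoredAccount = LegacyStoredAccount & {
  disabledReason?: "user" | "auth-failure";
};

function migrateLegacyAccount(account: LegacyStoredAccount): V3StoredAccount {
  const migrated: V3StoredAccount = { ...account };
  // Legacy disables carry no reason; default to "user" so they are never
  // mistakenly auto-revived as auth-failure disables.
  if (migrated.enabled === false && migrated.disabledReason === undefined) {
    migrated.disabledReason = "user";
  }
  return migrated;
}
```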
What:
- Persist `disabledReason` so explicit login and flagged-account restore revive only `auth-failure` disables while preserving manual disables
- Keep `auth-failure` disables until a fresh login, while clearing stale cooldown metadata on real manual re-enable
- Normalize `enabled` and `disabledReason` while preserving the canonical "omit unset optional fields" storage shape

Why:
Validation:
- `npm run typecheck`
- `npm run lint`
- `npm test`
- `npm run build`

Summary by CodeRabbit
New Features
Bug Fixes
Tests
note: greptile review for oc-chatgpt-multi-auth. cite files like
lib/foo.ts:123. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this pr replaces silent account removal after repeated auth failures with a persistent disable/revive flow, hardens save serialization against windows file-lock races, and adds a shutdown deadline to flush debounced writes before exit.
key changes:
- `disableAccountsWithSameRefreshToken` marks accounts `enabled: false, disabledReason: "auth-failure"` in-memory and persists via the new serialized `enqueueSave` queue instead of silently dropping them from the pool
- `flushPendingSave` gains a retry loop (max 20 iterations) that replays `saveRetryNeeded`-flagged writes after transient `EBUSY` failures, with an optional `deadlineMs` for shutdown callers
- `lib/shutdown.ts` switches from `process.once` to `process.on` with explicit deregistration, adds a 10 s windows-aware deadline before `process.exit`, and passes `deadlineMs` to cleanup fns so the account manager flush can race the deadline cleanly
- `persistAccountPool` correctly guards on `account.disabledReason !== "auth-failure"` before clearing the disable, preserving manual user-disables across logins
- `mergedDisabledReason` in the account-merge path gives `"user"` priority over `"auth-failure"`, preventing a stale disk state from making a user-disabled account auto-revivable
- normalization helpers (`normalizeStoredEnabled`, `normalizeStoredAccountDisabledReason`) and the V1→V3 migration default legacy disabled accounts to `"user"` so they are never mistakenly auto-revived
- dead code: an unreachable `else` branch in `trySetAccountEnabled` (the early-return guard above it already covers the case) and a redundant final check in the `flushPendingSave` loop (both branches that could make it false are already handled with `continue` above)

Confidence Score: 4/5
Important Files Changed
- adds `disableAccountsWithSameRefreshToken`, the serialized `enqueueSave` queue, the `flushPendingSave` retry loop, `trySetAccountEnabled` with a disabledReason guard, and `getEnabledAccountCount`; contains one unreachable `else` branch and a redundant final guard in `flushPendingSave`; windows file-lock retry logic and save serialization are solid.
- wires `setCachedAccountManager`, adds revival logic guarded on `disabledReason === "auth-failure"`, and improves zero-enabled-account error messages; the module-level `accountManagerCleanupHook ??=` pattern is sound but re-registers on every plugin instantiation.
- replaces `process.once` with `process.on` + explicit deregistration, adds a `signalExitPending` guard to prevent double-exit, passes `deadlineMs` context to cleanup functions, and adds a 10 s windows-aware timeout before force-exit; logic is correct.
- updates `normalizeAccountStorage` and `normalizeFlaggedStorage` to call `normalizeStoredEnabled`/`normalizeStoredAccountDisabledReason`; preserves the "omit unset optional fields" storage shape cleanly.
- adds the `AccountDisabledReason` type and normalization helpers; V1→V3 migration correctly defaults missing reasons to "user" while V3 normalization leaves them `undefined` (preventing auto-revival of legacy disables).
- covers `trySetAccountEnabled` edge cases and `flushPendingSave` concurrency; covers the windows file-lock retry path explicitly.

Sequence Diagram
```mermaid
sequenceDiagram
    participant RT as Request Handler
    participant AM as AccountManager
    participant Disk as accounts.json (Windows FS)
    participant SH as shutdown.ts
    RT->>AM: selectAccount() [enabled only]
    AM-->>RT: account (enabled !== false)
    RT->>AM: incrementAuthFailures(account)
    AM-->>RT: failures >= MAX_AUTH_FAILURES_BEFORE_DISABLE
    RT->>AM: disableAccountsWithSameRefreshToken(account)
    Note over AM: sets enabled=false, disabledReason="auth-failure"<br/>clears cooldown, resets failure counter
    AM-->>RT: disabledCount
    RT->>AM: saveToDiskDebounced()
    Note over AM: saveRetryNeeded=true, debounce armed
    Note over Disk: Windows AV lock may block write
    AM->>Disk: enqueueSave → persistToDisk()
    alt EBUSY / file lock
        Disk-->>AM: Error (EBUSY)
        Note over AM: saveRetryNeeded stays true
    else success
        Disk-->>AM: ok
        Note over AM: saveRetryNeeded=false
    end
    Note over SH: SIGINT / SIGTERM received
    SH->>AM: flushPendingSave({ deadlineMs: now+10s })
    loop until settled or deadline
        AM->>Disk: retry persistToDisk()
        Disk-->>AM: ok
    end
    SH->>SH: process.exit(0)
    Note over RT: User runs `opencode auth login`
    RT->>AM: persistAccountPool(results, false,<br/>{ reviveMatchingDisabledAccounts: true })
    Note over AM: merge → disabledReason="auth-failure" preserved<br/>revival check clears enabled/disabledReason
    AM->>Disk: save (account revived)
```

Comments Outside Diff (8)
lib/shutdown.ts, line 28-31 (link): `signalExitPending` never resets — silently swallows all future signals

once `signalExitPending = true` is set by the first signal, it is never reset. if `process.exit` is intercepted (mocked in tests, or the process somehow survives past `requestExit`), every subsequent SIGINT/SIGTERM hits the early-return guard and is silently dropped forever. `removeShutdownHandlers` is also skipped in `runCleanup`'s `.finally()` when `signalExitPending` is true, so the registered `process.on` listeners accumulate and are never removed for that module instance.

current tests use `vi.resetModules()` inside signal-handler tests to get fresh state, so this doesn't bite the test suite today. but any future test that calls `registerCleanup` → triggers a signal path → then calls `runCleanup()` programmatically without a module reset will find signals permanently ignored. consider also resetting the flag in the cleanup `.finally()` block when `signalExitPending` is true but cleanup completed successfully without an actual `process.exit` call (i.e., the mocked-exit path).

lib/accounts.ts, line 784-792 (link): `resolvePendingSaveSettledWaiters` skips resolution while the debounce timer is still armed

`hasPendingSave()` returns `true` when `saveDebounceTimer !== null`. so if a waiter calls `waitForPendingSaveToSettle()` while only a debounce timer is pending (no in-flight `pendingSave` yet), and the timer fires, kicks off `enqueueSave`, the save completes, and then `saveToDiskDebounced` is immediately called again — the waiter registered before that second arm will stay in `pendingSaveSettledWaiters` until another full save completes.

in `scheduleTrackedManagerPrune`, the comment says "re-registering this waiter after each settle is intentional," which handles the multi-arm case for the shutdown path. but any waiter that lands while the debounce is live must survive through at least one complete `enqueueSave` → `.finally()` cycle; if the debounce is then cancelled by `flushPendingSave` before it fires, and `flushPendingSave` calls `enqueueSave` directly, `resolvePendingSaveSettledWaiters` will run and correctly drain them. this is fine today, but worth a unit test covering: register waiter → debounce → `flushPendingSave` cancels debounce → waiter resolves without a second debounce cycle.

lib/shutdown.ts, line 3 (link): 5-second signal timeout may be too tight on Windows

`SIGNAL_CLEANUP_TIMEOUT_MS = 5_000` is the total budget for all cleanup work, including `flushPendingSave()` on every tracked and active manager. on Windows with antivirus scanners, a single write can be blocked for 2-4 seconds. if there are two or more managers with pending saves (e.g., after a plugin reload), the timeout can fire before all saves flush — and `process.exit(0)` runs while disabled state is still unwritten to disk. the save-queue serialization in `flushPendingSave` means managers flush one after another, not in parallel, so the risk compounds with manager count. consider raising this, or making it configurable. also, no test currently exercises the path where the 5 s timeout races against a slow `flushPendingSave`; adding a test that intercepts `process.exit` and verifies the warn-and-continue behavior on timeout would close this windows concurrency gap.

lib/shutdown.ts, line 3-4 (link): Windows timeout vs flush-iteration budget mismatch

`SIGNAL_CLEANUP_TIMEOUT_MS = 10_000` is the hard wall for shutdown cleanup, but `flushPendingSave` has `MAX_FLUSH_ITERATIONS = 20` and will keep re-queuing a new `enqueueSave` each iteration when a save is re-armed. each iteration awaits the previous in-flight write. on windows hosts where AV scanners hold the accounts file for 2–5 s per attempt, 20 iterations can easily exceed 10 s, causing `requestExit()` to fire mid-flush and lose the newly disabled state. consider either shrinking `MAX_FLUSH_ITERATIONS`, tying the flush loop to a wall-clock deadline so the two budgets stay in sync, or documenting that `MAX_FLUSH_ITERATIONS` is intentionally set low enough that it cannot exceed the signal timeout on realistic windows hardware.

index.ts, line 1688-1698 (link): `scheduleTrackedManagerPrune` waiter not re-registered after settle with new pending work

in the `.finally` callback of `waitForSettle`: if `managerHasPendingSave(manager)` is `true` at the time `.finally` runs (because a new debounce save was re-armed between the settle and the callback), the manager correctly stays in `trackedAccountManagersForCleanup`, but `trackedManagerSettledWaits` has already been cleared. `scheduleTrackedManagerPrune` will not be called again unless `setCachedAccountManager` or `invalidateAccountManagerCache` fire for this manager. in practice this is a memory-only issue — the manager is still in the tracked set and will be flushed by the shutdown hook — but the auto-pruning that was supposed to clean up the set after the work is done never fires. over many reload cycles this keeps stale `AccountManager` objects alive until the next cache invalidation. consider re-calling `scheduleTrackedManagerPrune(manager)` inside the `.finally` when a new pending save is detected.
flushPendingSaveentry check is:after
saveToDiskDebouncedfires andenqueueSave's inner save rejects (e.g. EBUSY), the.finally()setsthis.pendingSave = null. the debounce timer was already cleared when it fired. so at shutdown, bothsaveDebounceTimer === nullandpendingSave === null.flushPendingSaveimmediately returns with nothing to flush, and the in-memory disabled state is lost on restart.the test "persists newer disabled state after two queued save failures" works because it manually re-arms the debounce before calling
flush. the production shutdown path inindex.tscallsflushPendingSavewithout re-arming. adding a no-pending-state test would make this gap visible:this is the windows AV file-lock gap: save fails silently, shutdown flush finds nothing, disabled state reloads as enabled. adding an explicit retry or a "dirty" flag on save failure would close the loop.
lib/shutdown.ts, line 88-118 (link)Second SIGINT/SIGTERM silently swallowed during cleanup window
the switch from
process.oncetoprocess.on+if (signalExitPending) returnmeans a second Ctrl+C within the 10-second cleanup window is a silent no-op. previously, a second signal would hit the OS default handler and force-kill the process — a deliberate user escape hatch. now, users have no visible feedback and no way to interrupt a hung cleanup; they must wait the fullSIGNAL_CLEANUP_TIMEOUT_MS(10 s) before the process exits via the deadline timer.consider logging a message on the second signal so the user knows cleanup is in progress:
lib/accounts.ts, line 980-985 (link)Unreachable
elsebranch — dead codethe
else { account.enabled = false; }branch can never execute. to reach it you need!enabled,!reason, andaccount.disabledReason === "auth-failure"— but that exact combination is already handled by the early return at line 966-968:so the final
elseis dead. this won't cause a runtime bug today, but it misleads future readers into thinking there is a live code path and can mask a refactor that accidentally removes the early-return guard. suggest removing the dead branch:Prompt To Fix All With AI
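The second-SIGINT comment above asks for visible feedback instead of a silent guard. A minimal sketch of that suggestion — the logging call is an assumption, and the guard mirrors the `signalExitPending` flow described in the review:

```typescript
let signalExitPending = false;

function onShutdownSignal(log: (msg: string) => void, requestExit: () => void): void {
  if (signalExitPending) {
    // Previously a silent no-op: tell the user cleanup is still running
    // instead of swallowing the second Ctrl+C with no feedback.
    log("shutdown already in progress; waiting for cleanup to finish");
    return;
  }
  signalExitPending = true;
  requestExit();
}
```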
Last reviewed commit: b979b68
Context used: